
    PRODUCING AN ON-CALL-HELP DISPATCHER GUIDE FOR THE ADVANCEMENT OF THE COSTA RICA FIRE DEPARTMENT

    As Costa Rica faces an increasingly dynamic emergency landscape, Cuerpo de Bomberos, the country's fire department, is in a constant search for ways to improve its response services. This includes everything from improving firefighter training to updating technological devices. This report details our contribution to a digital on-call-help dispatcher guide that complements the fire department's improvement efforts. Our guide was created specifically for the dispatchers at the Office of Communications, Santo Domingo, the site of this project.

    Preventing Discriminatory Decision-making in Evolving Data Streams

    Bias in machine learning has rightly received significant attention over the last decade. However, most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting. Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking. The unique challenges of the online environment make addressing bias more difficult than in the offline setting. First, Streaming Machine Learning (SML) algorithms must deal with the constantly evolving real-time data stream. Second, they need to adapt to changing data distributions (concept drift) to make accurate predictions on new incoming data. Adding fairness constraints to this already complicated task is not straightforward. In this work, we focus on the challenges of achieving fairness in biased data streams while accounting for the presence of concept drift, accessing one sample at a time. We present Fair Sampling over Stream (FS^2), a novel fair rebalancing approach capable of being integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness Bonded Utility (FBU), to evaluate and compare the trade-off between performance and fairness of different bias mitigation methods efficiently. FBU simplifies the comparison of fairness-performance trade-offs of multiple techniques through one unified and intuitive evaluation, allowing model designers to easily choose a technique. Overall, extensive evaluations show our measures surpass those of other fair online techniques previously reported in the literature.
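    The abstract does not define FS^2 or FBU concretely, so the following is only a generic illustration of the kind of one-sample-at-a-time fairness monitoring the work addresses: a sliding-window demographic-parity gap between two groups in a prediction stream. The function name, window size, and group encoding are all assumptions for this sketch, not the paper's method.

    ```python
    from collections import deque

    def streaming_parity_gap(stream, window=1000):
        """Track the demographic-parity gap over a sliding window of
        (group, prediction) pairs, updated one sample at a time.

        stream: iterable of (group, prediction) with group in {0, 1}
                and prediction in {0, 1}.
        Returns a list with one gap value per incoming sample
        (None until both groups have been observed in the window).
        """
        recent = deque(maxlen=window)  # bounded memory for the stream
        gaps = []
        for group, pred in stream:
            recent.append((group, pred))
            pos = {0: 0, 1: 0}  # positive predictions per group
            cnt = {0: 0, 1: 0}  # samples seen per group
            for g, p in recent:
                cnt[g] += 1
                pos[g] += p
            if cnt[0] and cnt[1]:
                gaps.append(abs(pos[0] / cnt[0] - pos[1] / cnt[1]))
            else:
                gaps.append(None)
        return gaps
    ```

    A windowed statistic like this is one simple way to keep a fairness measure responsive under concept drift, since old samples age out of the window as the distribution shifts.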